American Journal of Infection Control
Elsevier BV
Preprints posted in the last 90 days, ranked by how well they match American Journal of Infection Control's content profile, based on 12 papers previously published here. The average preprint has a 0.01% match score for this journal, so anything above that is already an above-average fit.
Shinto, H.; Chowell, G.; Takayama, Y.; Ohki, Y.; Saito, K.; Mizumoto, K.
Background. In long-term care facilities (LTCFs), close-contact identification often relies on staff recall and monitoring records because residents may be unable to self-report reliably. How these different record-generation processes relate to proximity-based sensor measurements in routine LTCF workflow remains unclear, and how such differences may influence contact-based decision-making in outbreak response is not well understood. Methods. We conducted a five-day observational study in a Japanese LTCF using ultra-wideband (UWB) indoor positioning. Twenty-seven participants wore UWB tags, including 16 residents and 11 staff members; 10 staff members completed questionnaires. We compared UWB-derived proximity with questionnaire-derived contacts from staff self-report and monitoring-based proxy records, and assessed directional discrepancies under multiple distance-time thresholds. Results. Questionnaire-based records and UWB-derived proximity showed different patterns of discrepancy across contact types. Within this facility, resident-related monitoring-based proxy records showed relatively small directional discrepancies, whereas staff self-reports tended to identify additional resident-staff contacts under the baseline threshold (≤1.0 m for ≥15 min). Several alternative thresholds were associated with discrepancies closer to zero than the baseline, although the apparent ranking varied by summary metric. Conclusions. In this single-facility observational study, different contact-list generation processes were associated with different patterns of discrepancy relative to a proximity-based operational measure. These findings support interpretation in terms of workflow-specific contact-list generation rather than a single universally optimal threshold, and they may help inform facility-level review of contact identification practices. Aligning contact identification strategies with facility-specific workflows could improve the feasibility and effectiveness of infection prevention and control (IPC) practices in LTCFs.
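The baseline contact definition above (≤1.0 m for ≥15 min) maps directly onto positioning data. The sketch below is a minimal illustration, not the study's pipeline: the tag coordinates, sampling interval, and the use of cumulative (rather than consecutive) close-proximity time are all assumptions.

```python
import numpy as np

def contact_minutes(xy_a, xy_b, sample_interval_s=10, max_dist_m=1.0):
    """Cumulative time (minutes) two tags spend within max_dist_m.

    xy_a, xy_b: (T, 2) arrays of synchronized UWB positions in metres.
    Assumes a fixed sampling interval; real systems need gap handling.
    """
    dist = np.linalg.norm(xy_a - xy_b, axis=1)  # per-sample distance in metres
    close_seconds = np.sum(dist <= max_dist_m) * sample_interval_s
    return close_seconds / 60.0

def is_contact(xy_a, xy_b, min_minutes=15.0, **kw):
    """Apply a '≤1.0 m for ≥15 min'-style distance-time threshold."""
    return contact_minutes(xy_a, xy_b, **kw) >= min_minutes

# Toy example: two tags drifting toward each other over 30 minutes.
rng = np.random.default_rng(0)
t = 180  # 180 samples at 10 s = 30 min
a = np.cumsum(rng.normal(0, 0.05, (t, 2)), axis=0)
b = a + np.linspace(3.0, 0.3, t)[:, None]  # offset closes from 3 m to 0.3 m
print(f"close time: {contact_minutes(a, b):.1f} min, contact: {is_contact(a, b)}")
```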
Laskaris, Z.; Baron, S.; Markowitz, S. B.
Objectives. Rising temperatures are a major climate-related hazard for U.S. workers, increasing heat-related illness and a broad range of occupational injuries through indirect pathways often overlooked in economic evaluations. We examined the association between temperature and occupational injury and illness and quantified heat-attributable injuries (including illnesses) and costs in New York State. Methods. We conducted a time-stratified case-crossover study of 591,257 workers' compensation (WC) claims during the warm season (2016-2024). Daily maximum temperature was linked to injury date and county and modeled using natural cubic splines, with effect modification by industry and worker characteristics. Results. Injury risk increased with temperature, becoming statistically significant at approximately 78°F. Relative to 65°F, injury odds increased to 1.06 (95% CI: 1.01-1.10) at 80°F, 1.12 (1.07-1.18) at 90°F, and 1.17 (1.11-1.23) at 95°F. Overall, 5.0% of claims (2,322 annually) were attributable to heat. At temperatures ≥80°F, an estimated 1,729 excess injuries occurred annually, generating approximately $46 million in WC costs. An estimated $3.2 million to $36.1 million in medical expenditures were associated with incomplete claims, likely borne outside the WC system. Conclusions. These findings demonstrate substantial economic costs not fully captured within WC and support workplace heat protections as a cost-containment strategy that can reduce health care spending and strengthen workforce resilience.
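The headline attributable-injury figures follow from applying the attributable fraction among the exposed, AF = (OR − 1)/OR, to claims occurring in each temperature band. A minimal sketch; the claim counts per band are invented for illustration, and only the odds ratios echo the abstract.

```python
# Heat-attributable claims via the attributable fraction among the exposed,
# AF = (OR - 1) / OR, applied to claims filed on days in each temperature band.
bands = {            # band: (odds ratio vs 65°F, hypothetical warm-season claims)
    "80-89°F": (1.06, 24_000),
    "90-94°F": (1.12, 6_000),
    ">=95°F":  (1.17, 1_500),
}
excess = {band: n * (or_ - 1) / or_ for band, (or_, n) in bands.items()}
for band, e in excess.items():
    print(f"{band}: {e:,.0f} attributable claims")
print(f"total: {sum(excess.values()):,.0f}")
```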
Blount, H.; Ward, J.; James, P. A.; Worsley, P. R.; Filingeri, D.; Koch Esteves, N.
Introduction. Climate change is increasing the frequency and intensity of heatwaves, creating critical challenges for social care settings where both staff and residents face heightened heat-related vulnerability. This study examined the impact of heatwaves on UK care homes using a national survey of staff experiences, challenges, and adaptation strategies. Methods. Care home staff (N = 225) in managerial (N = 88) and caregiving roles (N = 137) completed an online survey investigating staff perceptions of heatwaves' impact on thermal comfort, health, and vulnerability of themselves and residents, alongside current heat resilience strategies and the barriers to their implementation. Results. Two thirds (66%) of the surveyed staff reported being too hot three or more times per day, with a perceived impact on their ability to perform tasks (90%) and on residents' comfort and health (92%). Staff demonstrated strong awareness of older adults' heightened heat vulnerability (95%) and of the signs of heat illness (87%). Thematic analysis identified five key barriers to providing effective cooling: funding limitations, inadequate equipment, building constraints, staffing pressures, and individual resident needs; and four priority improvement areas: increased access to cooling equipment, improved temperature control, strengthened strategy and policy, and support for staff needs. Conclusions. Heatwaves place considerable strain on care homes, challenging staff capacity to maintain comfortable thermal conditions despite good knowledge of heat risks. Financial, infrastructural, and staffing constraints limit effective heat resilience practices. Evaluating and implementing affordable, accessible, and context-appropriate cooling strategies will be essential to protect both residents and staff as extreme heat events become more frequent.
Liu, L.; Huang, S. C.-H.; Hirata, A.; Jones, I.; Liu, N.; Shirai, J.; Zuidema, C.; Austin, E.; Seto, E.
Wildfire smoke (WFS) events are an important public health concern for communities in the Pacific Northwest of the United States. Previous studies of portable air cleaners, including high-efficiency particulate air (HEPA) filtration and do-it-yourself (DIY) box fan filters built with MERV 13-rated filters, have indicated that their use in residential settings may be an effective way to reduce indoor exposures to fine particulate matter during WFS episodes. The low cost, simple build instructions, and ready availability of materials for DIY box fan filters have made their distribution by both public health agencies and community groups an attractive approach to improving community preparedness. Here, we describe a low-cost, easy-to-assemble, portable exposure chamber system that can be used to support a variety of community-engaged demonstrations of WFS removal efficiency and to estimate the efficiency of filtration systems in a controlled environment. We conducted experiments using the portable chamber to assess the clean air delivery rate (CADR) of a MERV 13-rated DIY box fan filter, which was found to be 92.2 and 145.2 cfm at low and high fan speeds, respectively. We also provide a case-study example from a collaboration with a tribal community in Central Washington, which used the tent system for a live demonstration of a DIY box fan filter experiment during a community gathering to promote knowledge of WFS and air quality interventions and to support distribution of box fan filters.
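CADR is conventionally estimated from the difference in first-order particle decay constants measured with and without the filter running, scaled by chamber volume. A minimal sketch under assumed values; the decay constants, chamber volume, and log-linear fit below are illustrative, not the study's measurements.

```python
import numpy as np

def decay_constant(t_min, conc):
    """First-order decay constant (1/min) from a log-linear fit of PM decay."""
    slope, _ = np.polyfit(t_min, np.log(conc), 1)
    return -slope

# Hypothetical chamber runs: natural decay vs decay with the DIY filter on.
t = np.arange(0, 31, 1.0)           # minutes
natural = 500 * np.exp(-0.02 * t)   # deposition/leakage only
filtered = 500 * np.exp(-0.32 * t)  # filter + natural losses

chamber_volume_ft3 = 300            # assumed tent volume
cadr_cfm = (decay_constant(t, filtered)
            - decay_constant(t, natural)) * chamber_volume_ft3
print(f"CADR ≈ {cadr_cfm:.0f} cfm")  # ≈ 90 cfm with these toy numbers
```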
Chhabra, S.; Nair, S.; Bramley, A.; Chee, J. Y.; Vignesvaran, K.; See, D. R. E.; Sun, L. J.; Ching, A. H.; Li, A. Y.; Kayastha, G.; Chetchotisakd, P.; Cooper, B. S.; Charani, E.; Mo, Y.
Background. Antibiotic use is prevalent in hospitals, driving the emergence of drug-resistant pathogens. We investigated the contextual influences on antibiotic prescribing behaviour across hospitals in high-, middle-, and low-income countries in Asia, with an aim to provide actionable insights to improve prescribing behaviour. Methods. We conducted a large qualitative study across ten institutions in Singapore, Nepal, and Thailand. Semi-structured interviews and ethnographic observations involving physicians, nurses, pharmacists, and management staff were conducted. Data were analysed thematically using QSR NVivo 14. Findings. A total of 194 interviews were conducted amongst physicians (54·1%), nurses (19·6%), pharmacists (12·4%), and management staff (13·9%). Structural factors such as limited microbiology laboratory capabilities, concerns about antibiotic quality, weak infection prevention and control policies, and the lack of relevant, updated guidelines were prominent drivers of prolonged and broad-spectrum antibiotic prescriptions. Where these system supports were in place, prescribing decisions were less defensive and more targeted, although prescriber responsibility and concerns about immediate patient deterioration continued to influence practice. Across settings, clinicians tended to prioritise short-term perceived benefits of antibiotic treatment over the longer-term risks of antimicrobial resistance.
Yasir, M.; Willcox, M.
Endocavity ultrasound transducers, particularly transvaginal ultrasound (TVUS) probes, contain intricate structures such as notches, grooves, lens surfaces, and handle edges that are highly susceptible to microbial contamination yet difficult to disinfect using conventional high-level disinfection (HLD) methods. This study evaluated the efficacy of a novel ultraviolet-C light-emitting diode (UV-C LED) HLD system in eliminating microbial contamination from these complex probe surfaces. Two TVUS probes were sampled from predefined high-risk regions before and after disinfection following clinical use. Probe A was sampled at the top and bottom notches and both sides of the handle, while Probe B was assessed at the lens, edges, and bent groove regions. Microbial contamination was quantified using swab sampling, culture on agar plates, and identification via MALDI-TOF. Environmental sampling of examination and disinfection rooms was also performed. To assess the system's robustness, probe sites were repeatedly inoculated with Bacillus subtilis spores and evaluated following UV-C treatment. Before UV-C treatment, contamination rates ranged from 25% to 57% across sampled regions, with microbial loads reaching up to 3.9 log CFU. Identified organisms included Staphylococcus epidermidis, Pseudomonas koreensis, Bacillus cereus, and Propionibacterium spp. Probe sheaths were also predominantly contaminated with Staphylococcus epidermidis, with counts reaching up to 4.3 log CFU. Environmental sampling revealed diverse microbiota, with higher contamination levels in examination rooms compared with disinfection areas. Following 90 seconds of UV-C exposure, no microbial growth was detected on any sampled site, indicating 100% decontamination. Additionally, UV-C treatment achieved a >6.7 log reduction of B. subtilis spores across all tested regions. These findings demonstrate that UV-C LED technology provides rapid, effective, and consistent high-level disinfection of complex TVUS probe surfaces, supporting its potential as a reliable disinfection modality in clinical settings.
Saber, L. B.; Rojas, M.; Anderson, D. M.; Anderson, D. J.; Claus, H.; Cronk, R.; Linden, K. G.; Lott, M. E. J.; Radonovich, L. J.; Warren, B. G.; Williamson, R. D.; Vincent, R. L.; Gutierrez-Cortez, S.; Calderon Toledo, C.; Brown, J.
Hospital-acquired infections are a known and growing problem worldwide. Far-UVC is a novel disinfection method that inactivates bacteria with limited penetration into human skin or eyes. A clustered, unmatched, randomized controlled trial (RCT) will be implemented in two Bolivian hospitals. The intervention arm will receive functioning Far-UVC lamps, whereas the control arm will receive identical lamps that do not emit UV light (shams). Based on baseline data, 40 lamp fixtures will be installed above hospital sinks, 10 per arm per hospital. Environmental samples (air and surface swabs) will be collected and analyzed via culture and sequencing. Simultaneously, air chemical monitoring data will be collected.
Yoshida, H.; Adelman, M. W.; Rasmy, L.; Ifiora, F.; Xie, Z.; Perez, M. A.; Guerra, F.; Yoshimura, H.; Jones, S. L.; Arias, C. A.; Zhi, D.; Nigo, M.
Background. Candidemia is a rare but life-threatening bloodstream infection that remains difficult to predict using conventional risk stratification approaches; as a result, empiric antifungal therapy is often delayed even in high-risk patients, highlighting the need for improved predictive strategies. Methods. We developed a deep learning model (PyTorch_EHR) to predict 7-day candidemia risk using electronic health record data from two large cohorts (Houston Methodist Hospital System [HMHS] and MIMIC-IV), including adult inpatients who underwent at least one blood culture. Model performance was compared with logistic regression (LR), LightGBM, and established intensive care unit candidemia scores. We further implemented a two-step prediction framework integrating candidemia and 30-day mortality risk models to inform empiric antifungal decision-making. Results. Among 213,404 and 107,507 patients in the HMHS and MIMIC-IV cohorts, candidemia occurred in fewer than 1% (851 [0.4%] and 634 [0.6%], respectively). PyTorch_EHR outperformed LR, LightGBM, and existing candidemia scores, particularly in terms of area under the precision-recall curve (AUPRC), in both HMHS and MIMIC-IV. By integrating 30-day mortality risk, the two-step framework identified an additional 20 and 28 candidemia cases beyond the one-step model, increasing coverage to 61% (121/199) and 46% (68/147) in HMHS and MIMIC-IV, respectively. Many patients identified by the two-step framework had high mortality yet did not receive empiric antifungal therapy (61.1% in HMHS; 82.6% in MIMIC-IV). Conclusion. A two-step deep-learning framework integrating candidemia and mortality risk may support early identification of high-risk patients and facilitate timely empiric antifungal therapy. Prospective studies are warranted to confirm these findings.
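The abstract does not spell out how the two risk scores are combined, so the sketch below is one plausible reading, not the authors' method: a patient is flagged when candidemia risk alone is high, or when moderate candidemia risk coincides with high 30-day mortality risk. All thresholds and the rule itself are assumptions.

```python
from dataclasses import dataclass

@dataclass
class Patient:
    candidemia_risk: float   # P(candidemia within 7 days), from a step-1 model
    mortality_risk: float    # P(death within 30 days), from a step-2 model

def flag_for_empiric_antifungal(p: Patient,
                                cand_thresh=0.05,
                                combined_cand_thresh=0.02,
                                mort_thresh=0.30) -> bool:
    # Step 1: very high candidemia risk alone triggers a flag.
    if p.candidemia_risk >= cand_thresh:
        return True
    # Step 2: moderate candidemia risk combined with high mortality risk.
    return (p.candidemia_risk >= combined_cand_thresh
            and p.mortality_risk >= mort_thresh)

print(flag_for_empiric_antifungal(Patient(0.03, 0.45)))  # True via step 2
print(flag_for_empiric_antifungal(Patient(0.01, 0.60)))  # False
```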
Dovlatbekyan, N. M.; Ochakovskaya, I. N.; Penjoyan, A. G.; Durleshter, V. M.; Onopriev, V. V.; Avagimov, A. D.
Objective. To evaluate the effectiveness of a bundle of interventions involving a clinical pharmacologist aimed at changing surgeons' approach to perioperative antibiotic prophylaxis (PAP) in an oncourology department. Materials and Methods. A single-center retrospective observational study was conducted. Data from 226 patients who underwent prostatectomy or nephrectomy in the oncourology department of Regional Clinical Hospital No. 2 (Krasnodar, Russia) between 2023 and 2025 were analyzed. Periods before (n=125) and after (n=101) the implementation of an antimicrobial stewardship (AMS) strategy bundle with active participation of a clinical pharmacologist (pre-authorization, audit with feedback, education, handshake stewardship) were compared. The primary endpoint was the proportion of surgeries performed in compliance with the PAP protocol. Secondary endpoints included the incidence of infectious complications, antibiotic consumption (DDD/100 bed-days), direct costs of antibacterial drugs, dynamics of the microbial landscape, and the Drug Resistance Index (DRI). Results. After AMS implementation, the proportion of surgeries performed in accordance with the PAP protocol increased from 0% to 47.7% for prostatectomies and to 55.6% for nephrectomies. The mean duration of antibiotic use decreased from 7 to 2 days (p<0.001). Antibiotic consumption decreased by 31.2%, and costs were reduced by a factor of 4.3. The proportion of ESKAPE organisms in the microbial profile decreased from 26.3% to 16.4%. There was no statistically significant increase in the frequency of infectious complications (2.4% vs. 3.0%; p=1.000) or mortality (0% in both groups). Conclusions. AMS implementation integrating a clinical pharmacologist into the oncourology department workflow significantly improved adherence to clinical guidelines and reduced irrational antibiotic use and financial costs without compromising patient safety. This approach can serve as a model for optimizing PAP in other surgical departments. Keywords: antibiotic prophylaxis, antimicrobial stewardship, drug resistance, clinical pharmacologist, cost-benefit analysis, oncourology
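DDD/100 bed-days, the consumption metric used here, is standard WHO arithmetic: grams dispensed divided by the WHO-defined daily dose, normalized per 100 bed-days. A worked toy example; the drug quantities and DDD value below are illustrative, not the study's.

```python
# DDD per 100 bed-days: (grams dispensed / WHO DDD in grams) / bed-days * 100.
def ddd_per_100_bed_days(grams_dispensed: float, who_ddd_g: float,
                         bed_days: float) -> float:
    return grams_dispensed / who_ddd_g / bed_days * 100

# e.g. 900 g of ceftriaxone (WHO DDD = 2 g) over 3,000 bed-days:
print(ddd_per_100_bed_days(900, 2.0, 3_000))  # 15.0 DDD/100 bed-days
```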
Ochakovskaya, I. N.; Onopriev, V. V.; Dovlatbekyan, N. M.; Zhuravleva, K. S.; Zamulin, G. Y.; Durleshter, V. M.
Objective. To evaluate the diagnostic and prognostic significance of C-reactive protein (CRP) level dynamics within the first five days after surgery for the early detection of surgical site infections (SSI), and to identify independent risk factors, taking into account regional specifics of surgical management (types and duration of procedures) as well as the local hospital microbial landscape. Materials and Methods. A single-center retrospective cohort analysis of data from 127 patients who underwent surgical procedures between 2022 and 2024 was conducted. CRP levels on postoperative days 1, 3, and 5 were assessed, and delta values were calculated. Descriptive statistics, ROC analysis, and multivariate logistic regression were used to identify predictors of SSI. Results. Patients with SSI lacked the physiological decrease in CRP levels by day 5. The most informative indicator was the CRP level on day 3: a threshold of >106 mg/L was associated with a high risk of SSI (AUC=0.76; sensitivity 85%, specificity 63%). Independent predictors of SSI included surgery duration (OR=1.015 per 1 min; p<0.001) and the increase in CRP between days 3 and 5 (delta CRP3-5: OR=1.027; p=0.023). A combined model (clinical parameters + CRP) demonstrated the highest predictive ability (AUC=0.79). Conclusion. Monitoring CRP dynamics, particularly on days 3 and 5, is a highly informative and accessible method for the early diagnosis of SSI. A CRP level above approximately 100 mg/L on day 3 and its subsequent increase should serve as a trigger for in-depth diagnostic investigation and rationalization of antimicrobial therapy. Keywords: C-reactive protein, postoperative complications, surgical site infection, antibiotic therapy, predictive factors, diagnosis
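Deriving an operating threshold such as the day-3 CRP cutoff is a routine ROC exercise. The sketch below uses simulated CRP distributions, not the study's data, to show the mechanics of reading sensitivity and specificity off a candidate cutoff; the distribution parameters are assumptions.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(1)
crp_no_ssi = rng.lognormal(mean=np.log(80), sigma=0.5, size=500)  # non-SSI day-3 CRP
crp_ssi = rng.lognormal(mean=np.log(150), sigma=0.5, size=60)     # SSI day-3 CRP
y = np.r_[np.zeros(500), np.ones(60)]
crp = np.r_[crp_no_ssi, crp_ssi]

print(f"AUC = {roc_auc_score(y, crp):.2f}")
fpr, tpr, thresh = roc_curve(y, crp)
# Sensitivity/specificity at a CRP cutoff near the study's >106 mg/L:
i = np.argmin(np.abs(thresh - 106))
print(f"at {thresh[i]:.0f} mg/L: sens={tpr[i]:.2f}, spec={1 - fpr[i]:.2f}")
```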
Law, A. H. T.; Wong, J. Y.; Lin, Y.; Cowling, B. J.; Wu, P.
Background. Variation in COVID-19 mortality rates by sex could have several explanations. We aimed to determine sex differences in infection and mortality patterns across different COVID-19 epidemics in Hong Kong and to evaluate potential hypotheses. Methods. We estimated age- and sex-specific incidence rates of cases, hospitalizations, and deaths per 100,000 population. Case-hospitalization risks, case-fatality risks (CFRs), and hospital-fatality risks across the COVID-19 pandemic were also estimated. Adjusted and unadjusted risks were estimated and compared to explore the relationships between mortality and health-related variables. We also examined the sex ratio of mortality rates for respiratory diseases from 2000 to 2019. Results. Hong Kong recorded 2,876,110 COVID-19 cases and 12,737 deaths between January 2020 and January 2023, with 1,317,368 cases (45.8%) and 7,523 (59.1%) fatal cases occurring in males. The incidence rate of cases was similar by sex across waves. The CFRs and hospital-fatality risks were higher in men across all waves. Males had a significantly higher mortality risk after adjusting for COVID-19 vaccination status and pre-existing chronic diseases. The ratio of COVID-19 mortality rates in men versus women from 2020 to 2023 was similar to the mortality ratio for other respiratory diseases in the pre-pandemic period. Conclusions. While infection rates were similar for males and females, males experienced higher mortality risks even after adjusting for differences in other known risk factors. COVID-19 shares a similar sex ratio of mortality with respiratory diseases other than COVID-19.
Johnson, K. E.; Vega Yon, G.; Brand, S. P. C.; Bernal Zelaya, C.; Bayer, D.; Volkov, I.; Susswein, Z.; Magee, A.; Gostic, K. M.; English, K. M.; Ghinai, I.; Hamlet, A.; Olesen, S. W.; Pulliam, J.; Abbott, S.; Morris, D. H.
Infectious disease forecasts can inform public health decision-making. Wastewater monitoring is a relatively new epidemiological data source with multiple potential applications, including forecasting. Incorporating wastewater data into epidemiological forecasting models is challenging, and relatively few studies have assessed whether this improves forecast performance. We present and evaluate a semi-mechanistic wastewater-informed forecasting model. The model forecasts COVID-19 hospital admissions at the state and territorial levels in the United States, based on incident hospital admissions data and, optionally, SARS-CoV-2 wastewater concentration data from multiple wastewater sampling sites. From February through April 2024, we produced real-time wastewater-informed COVID-19 forecasts using development versions of the model and submitted them to the United States COVID-19 Forecast Hub ("the Hub"). We then published an open-source R package, wwinference, that implements the model with or without wastewater as an input. Using proper scoring rules and measures of model calibration, we assess both our real-time submissions to the Hub and retrospective hypothetical forecasts from wwinference made with and without wastewater data. While the models performed similarly with and without the wastewater signal included, there was substantial heterogeneity for individual locations and dates where wastewater data meaningfully improved or degraded the model's forecast performance. Compared to other models submitted to the Hub during the period spanned by our submissions, the real-time wastewater-informed version of our model ranked fourth of 10 models, with the hospital admissions-only version of our model ranking second out of 10 models. Across the 2023-2024 winter epidemic wave, retrospective forecasts from wwinference would have performed similarly with and without the wastewater signal included: fifth and fourth out of 10 models, respectively. To better understand the drivers of differential forecast performance with and without wastewater, we performed an exploratory analysis investigating the relationship between characteristics of the input data and improved and reduced performance in our model. Based on that analysis, we identify and discuss key areas for further model development. To our knowledge, this is the first work to evaluate real-time and retrospective infectious disease forecasts across the United States both with and without wastewater data and in comparison with other forecasting models. Author Summary. Wastewater-based epidemiology, in combination with clinical surveillance, has the potential to improve situational awareness and inform outbreak responses. We developed a model that uses data on the pathogen concentration in wastewater from one or more wastewater treatment plants, in combination with hospital admissions, to produce short-term forecasts of hospital admissions. We produced and submitted forecasts of 28-day-ahead COVID-19 hospital admissions from this model to the U.S. COVID-19 Forecast Hub during the spring of 2024 and found that it performed well in comparison to other models during that limited time period.
To assess the added value of incorporating wastewater data into the model and to investigate how it would have performed had we submitted it during the entire 2023-2024 winter epidemic wave, we performed a retrospective analysis in which we produced forecasts from the model with and without including wastewater data, using data that would have been available in real time as of each forecast date. Both versions of the model would have been median overall performers had they been submitted to the Hub throughout the season. When comparing the model's performance with and without wastewater data included, we found that overall forecast performance was very similar, with wastewater data slightly reducing overall average forecast performance. Within this result, there was significant heterogeneity, with clear instances of wastewater data improving and detracting from forecast performance. We used trends in the observed data to generate hypotheses as to the drivers of improved and reduced relative forecast performance within our model. We conclude by suggesting future work to improve the model and, more broadly, the application of wastewater-based epidemiology to forecasting.
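Hub submissions of this kind are evaluated with proper scoring rules such as the weighted interval score (WIS) of Bracher et al. (2021), a quantile-based approximation of the continuous ranked probability score. A minimal reference implementation; the toy quantile forecast and observation below are invented.

```python
import numpy as np

def interval_score(lower, upper, y, alpha):
    """Interval score for a central (1 - alpha) prediction interval."""
    return ((upper - lower)
            + (2 / alpha) * np.maximum(lower - y, 0)
            + (2 / alpha) * np.maximum(y - upper, 0))

def weighted_interval_score(median, lowers, uppers, alphas, y):
    """WIS (Bracher et al. 2021): weighted sum of the median's absolute
    error and interval scores over K central intervals, weights alpha_k/2."""
    K = len(alphas)
    total = 0.5 * abs(y - median)
    for lo, up, a in zip(lowers, uppers, alphas):
        total += (a / 2) * interval_score(lo, up, y, a)
    return total / (K + 0.5)

# Toy forecast of weekly admissions: median 120 with 50% and 90% intervals,
# observation 160. Lower WIS is better.
print(weighted_interval_score(120, [100, 70], [140, 190], [0.5, 0.1], y=160))
```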
ABRAHAM, K. S.; RAVI, S. S. S.; VAJRAVELU, L. K.
Microbial keratitis is a sight-threatening corneal infection with varying etiological agents, primarily bacteria and fungi. The goal of the current investigation was to assess and contrast the virulence factors of microorganisms isolated from non-contact lens-associated keratitis (NCLAK) and contact lens-associated keratitis (CLAK). Samples were collected from over 60 patients and analysed using standard microbiological techniques, including culture, Gram staining, KOH mount, biochemical tests, antimicrobial susceptibility testing, and biofilm assays. The results demonstrated that CLAK isolates were predominantly bacterial, especially Pseudomonas aeruginosa, known for strong biofilm production and high multidrug resistance. In contrast, NCLAK showed a higher incidence of fungal infections, particularly Candida albicans. The results highlight the significance of early diagnosis, tailored treatment, and improved awareness regarding contact lens hygiene to prevent complications associated with keratitis.
Kennedy, J. C.; Furguson, W.; Jones, O.; Ward, T.; Riley, S.; Tang, M. L.; Mellor, J.
Background. Epidemic forecasting research often assesses ensembles and their component models using probabilistic scoring rules. Quantifying how individual models affect ensemble performance is challenging, particularly across multiple targets and spatial scales. Methods. We present Winter 2024-25 forecasts of influenza and COVID-19 hospital admissions in England and conduct a retrospective simulation using the operational component models. Forecasts were scored using the per capita weighted interval score (pcWIS) for counts and the ranked probability score (RPS) for ordinal trend direction. We compared operational and retrospective forecasts, used generalised additive models (GAMs) to estimate the expected change in score from the inclusion of a model in a sub-ensemble, and used Pareto analysis to identify which sub-ensembles were Pareto-optimal across scoring rules. Results. Nationally, the influenza and COVID-19 operational ensembles achieved pcWIS of 5.20 × 10⁻⁷ and 3.98 × 10⁻⁷, with RPS of 0.234 and 0.171, respectively. This corresponds to a 47% improvement in score versus sub-ensembles for influenza pcWIS. However, influenza operational ensembles were 22% worse than sub-ensembles, on average, when measured by RPS. For COVID-19, operational ensembles were 43% and 265% worse, on average, than retrospective sub-ensembles by pcWIS and RPS, respectively. The sub-ensemble simulation showed that individual models influenced the ensembles during different epidemic phases. The Pareto analysis demonstrated that there can be a trade-off between optimising the relative direction score and the absolute count score. Interpretation. Our analysis shows that UKHSA forecasts were well calibrated with observations and often had performance comparable to optimal ensembles. Our GAM and Pareto analyses inform model selection for future ensembles. Author Summary. Forecasts of winter hospital pressures in England are an important tool for senior healthcare leaders. It is common practice to produce a forecasting ensemble, i.e., to combine the predictions of multiple models into a single, more accurate prediction. Forecasting teams should strive to produce the best forecast possible; one tool for this is retrospective evaluation over a forecasting season using proper scoring rules to assess performance. Our forecasts comprise two components: an epidemic trend direction estimate and a forecast of hospital admission numbers. We address two main challenges: first, understanding at which epidemic phase different ensemble contributions are most effective; second, the joint optimisation of an ensemble for both trend direction and admission numbers. We apply these methods to a variety of ensembles (sub-ensembles) based on our own modelling suite, and compare the sub-ensembles to our operational forecasts from the Winter 2024/25 season.
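The RPS used here for ordinal trend direction penalizes probability mass placed far from the observed category, by comparing cumulative distributions. A minimal sketch with an invented three-category trend forecast; the category labels are assumptions.

```python
import numpy as np

def ranked_probability_score(probs, observed_idx):
    """RPS for an ordinal outcome: squared differences between the forecast
    CDF and the observed step-function CDF, summed over categories."""
    probs = np.asarray(probs, dtype=float)
    cdf_forecast = np.cumsum(probs)
    cdf_observed = (np.arange(len(probs)) >= observed_idx).astype(float)
    return np.sum((cdf_forecast - cdf_observed) ** 2)

# Trend forecast over (decreasing, stable, increasing); "increasing" observed.
print(ranked_probability_score([0.2, 0.5, 0.3], observed_idx=2))  # 0.53
```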
Marshall, N. P.; Chen, W.; Amrollahi, F.; Nateghi Haredasht, F.; Maddali, M. V.; Ma, S. P.; Zahedivash, A.; Black, K. C.; Chang, A.; Deresinski, S. C.; Goldstein, M. K.; Asch, S. M.; Banaei, N.; Chen, J. H.
Background. The 2024 blood culture bottle shortage brought diagnostic resource allocation to the forefront, reflecting persistent, foundational challenges with low-value testing and empiric treatment approaches under clinical uncertainty. Objective. To determine whether a machine learning approach using electronic medical record data can predict bacteremia more effectively than existing systems and practices to guide diagnostic testing and empiric treatment strategies. Methods. In a retrospective cohort of 101,812 adult emergency department encounters (2015-2025), we first established an idealized cognitive baseline by evaluating physician and generative AI (GPT-5) application of the professional society-endorsed Fabre framework on a validation subset. We then trained an XGBoost model (Cultryx) on the full cohort to predict bacteremia, benchmarking its performance against real-world clinical heuristics (SIRS, Shapiro Rule). Results. For the idealized baseline, physicians applying the Fabre framework achieved 95.7% sensitivity, but GPT-5 automation failed to replicate this standard (71.6% sensitivity). In real-world benchmarking, Cultryx outperformed all clinical heuristics (AUROC 0.810). SIRS lacked specificity (41.2%), driving diagnostic overuse, while the Shapiro Rule lacked sensitivity (70.2%), missing approximately 30% of bacteremia cases. In contrast, when calibrated to a strict 95% sensitivity target, Cultryx achieved the highest culture volume deferral rate (26.2%, deferring ~15,872 bottles with predicted negative results) while maintaining a 98.9% negative predictive value. Cultryx score, a simplified bedside tool, retained a 20.8% deferral rate. Conclusions. Machine learning provides a superior, data-driven alternative to mainstream clinical heuristics for predicting bacteremia. By maximizing culture deferral without compromising pathogen detection, Cultryx can conserve diagnostic resources, reduce unnecessary empiric antibiotic exposure, and systematically improve patient safety. Summary. Cultryx, a machine learning model for blood culture stewardship, outperforms standard clinical heuristics in predicting bacteremia. This approach could reduce culture utilization by over 26% while preserving pathogen detection, conserving diagnostic resources, reducing unnecessary antibiotic exposure, and improving patient safety.
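Calibrating a classifier to a strict sensitivity target, as described for Cultryx, amounts to choosing the largest score threshold that still captures the required share of true positives, then reporting the deferral rate and NPV below it. A sketch on synthetic scores; the prevalence, score distributions, and helper names are assumptions, not the study's.

```python
import numpy as np

def calibrate_to_sensitivity(scores, labels, target_sens=0.95):
    """Largest score threshold that still captures >= target_sens of
    positives; cultures scoring below it would be deferred."""
    pos = np.sort(scores[labels == 1])
    return pos[int(np.floor((1 - target_sens) * len(pos)))]

rng = np.random.default_rng(2)
labels = (rng.random(20_000) < 0.03).astype(int)  # ~3% bacteremia prevalence
scores = np.where(labels == 1,
                  rng.beta(5, 3, 20_000),         # positives score higher
                  rng.beta(2, 6, 20_000))

t = calibrate_to_sensitivity(scores, labels)
deferred = scores < t
print(f"deferral rate: {deferred.mean():.1%}")
print(f"NPV among deferred: {(labels[deferred] == 0).mean():.1%}")
print(f"sensitivity retained: {(scores[labels == 1] >= t).mean():.1%}")
```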
Van Benten, K. R.; Cooper, L.; Kirby, K.; Kruer, S.; Byron, K.
BACKGROUND. Automated antimicrobial susceptibility testing (AST) systems are crucial for accurate, timely detection of drug-resistant microbial isolates. This meta-analysis assessed the performance of the BD Phoenix ("Phoenix", BD Diagnostic Solutions), Vitek® 2 ("Vitek 2", bioMérieux), and DxM MicroScan WalkAway ("MicroScan", Beckman Coulter, Inc.) AST systems relative to common reference methodology. METHODS. A systematic literature search in Ovid (MEDLINE and Embase) yielded 275 unique (non-duplicate) records, with 44 additional records retrieved from handsearching; 39 studies met inclusion criteria. Categorical agreement (CA), essential agreement (EA), very major errors (VMEs), and major errors (MEs) for the three instruments were compared against a common reference method. Ratios of proportions were analyzed using random-effects meta-regression. RESULTS. The instruments did not differ significantly in CA, EA, or ME. Vitek 2 showed a higher overall VME rate than Phoenix (~44% higher; Vitek 2-to-Phoenix ratio = 1.44; p=0.062, approaching significance) and MicroScan (74% higher; ratio = 1.74; p=0.045). No appreciable difference in VME was observed between Phoenix and MicroScan. Subgroup analyses, which indicated varying performance across systems, should be interpreted cautiously given the limited overall significance. Vitek 2 generally had higher relative VMEs for gram-negative organisms and lower relative VMEs for gram-positive organisms, whereas Phoenix showed the opposite pattern. MicroScan had relatively low VMEs when stratified by Clinical and Laboratory Standards Institute (CLSI) criteria; no differences in VMEs were observed using European Committee on Antimicrobial Susceptibility Testing (EUCAST) criteria. CONCLUSION. Although some VME differences were noted, the overall performance of the three systems was comparable. Organism- and drug-specific VME patterns, together with updates to CLSI criteria over time, highlight the importance of continued monitoring of current breakpoints for all three instruments.
Kapos, I. P.
Background: The UroLume endoprosthesis (AMS/Endo-care), commercially available from 1988 to 2007 and FDA-approved in 1996, was positioned as a permanent minimally invasive solution for recurrent bulbar urethral stricture and benign prostatic hyperplasia (BPH). Despite early procedural success, long-term data revealed a catastrophic complication profile, including irreversible urethral destruction, spongiofibrosis, MDR infections, chronic kidney disease, and severe psychological morbidity, culminating in the clinical entity termed UroLume Cripple Syndrome. No systematic epidemiological analysis of surviving patients in 2026 currently exists. Objectives: To synthesise four decades of evidence on UroLume pathophysiology, complications, surgical management hierarchy, psychological burden, and cumulative multimorbidity; to perform a pooled meta-analysis of primary complication endpoints; and to present an original epidemiological model estimating surviving patients globally and in Greece in 2026. Methods: PRISMA 2020-compliant systematic review and meta-analysis of PubMed, Embase, and the Cochrane Library (all dates to March 2026). Inclusion criteria: peer-reviewed studies of UroLume implantation, explantation, or post-UroLume reconstruction; minimum 12-month follow-up; series of n ≥ 10. Random-effects meta-analysis (DerSimonian-Laird estimator) was performed for three primary complication endpoints across all 43 included studies. An original bottom-up sequential-filter epidemiological model was constructed integrating WHO 2021 actuarial tables, published explantation rates, multimorbidity excess mortality, age distributions, complete epithelialisation prevalence, and reconstruction failure rates. Results: Forty-three studies met inclusion criteria (n=3,847 patients). Pooled meta-analysis yielded: restenosis/tissue ingrowth 37.9% (95% CI 36.1%-39.8%, I²=0%); stent explantation 8.7% (95% CI 7.7%-9.8%, I²=0%); urinary incontinence 9.7% (95% CI 8.7%-10.9%, I²=0%). Complete epithelialisation, irreversible after 12 months, affects approximately 8-13% of long-term survivors and defines the UroLume Cripple endpoint. Post-UroLume buccal mucosa graft urethroplasty achieves 76.7% success at 5 years when explantation is feasible. Our epidemiological model estimates 2,500-5,000 surviving patients globally with UroLume in situ in 2026, reducing to fewer than 100 clinically active patients aged <60 years after full multimorbidity adjustment. A six-filter sequential model for Greece converges to a final estimate of one surviving patient aged <60 years with complete epithelialisation following failed reconstruction. Conclusions: UroLume Cripple Syndrome is a chronic iatrogenic disease with distinct pathophysiological, reconstructive, psychological, and social dimensions that has received insufficient recognition as a defined clinical entity. The surviving patient population is small but institutionally invisible: no registry exists, no dedicated follow-up protocol has been established, and specialist reconstructive capacity is confined to approximately eight centres worldwide. Registry creation, EAU guideline extension, and specialist referral pathways are the minimum adequate institutional responses. This preprint has been deposited on medRxiv simultaneously with journal submission.
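The DerSimonian-Laird estimator named in the Methods pools study-level effects with a moment-based between-study variance. A compact reference implementation, applied to invented logit-transformed proportions rather than the review's data.

```python
import numpy as np

def dersimonian_laird(effects, variances):
    """Random-effects pooled estimate with the DerSimonian-Laird tau^2."""
    effects, variances = np.asarray(effects), np.asarray(variances)
    w = 1 / variances                             # fixed-effect weights
    theta_fe = np.sum(w * effects) / np.sum(w)
    q = np.sum(w * (effects - theta_fe) ** 2)     # Cochran's Q
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)  # between-study variance
    w_re = 1 / (variances + tau2)                 # random-effect weights
    pooled = np.sum(w_re * effects) / np.sum(w_re)
    se = np.sqrt(1 / np.sum(w_re))
    return pooled, se, tau2

# Toy pooling of logit-transformed restenosis proportions from three studies.
p = np.array([0.35, 0.40, 0.38])
n = np.array([120, 200, 150])
logit = np.log(p / (1 - p))
var = 1 / (n * p) + 1 / (n * (1 - p))             # variance of a logit proportion
est, se, tau2 = dersimonian_laird(logit, var)
print(f"pooled proportion ≈ {1 / (1 + np.exp(-est)):.3f} (tau^2 = {tau2:.4f})")
```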
Sabin, L. L.; West, R. L.; Coffin, S. E.; Machona, S.; Cowden, C.; Mwananyanda, L.; Lukwesa-Musyani, C.; Tembo, J.; Bates, M.; Hamer, D. H.
The Sepsis Prevention in Neonates in Zambia (SPINZ) trial was a prospective observational cohort study conducted in the neonatal intensive care unit of the University Teaching Hospital in Lusaka, Zambia. Introduction of an infection prevention and control (IPC) bundle reduced hospital-associated mortality, total mortality, suspected sepsis, and confirmed bloodstream infections. This companion analysis was undertaken to analyze intervention costs and cost-effectiveness in this low-resource setting. We conducted a retrospective cost analysis using SPINZ study-related records and expressed costs in real 2016 US dollars. We also estimated intervention cost-effectiveness using both outcomes from SPINZ (avoided deaths, confirmed infections, and suspected episodes of infection) and estimated disability-adjusted life years (DALYs) averted by the intervention. To provide data for policymakers, a future cost projection was undertaken to estimate the costs of the program implemented nationally over a 10-year period in real 2025 US dollars. A total of 2,035 neonates were enrolled from September 2015 to March 2017. Total costs during the implementation period (introduction of the IPC bundle; April-May 2016) and the subsequent intervention period were $17,641 and $5,265, respectively; most expenses were incurred during the preparation period due to travel and training. During the intervention period, the program's running cost was approximately $478 per month. The estimated cost per death, confirmed infection, and suspected episode averted was $208, $204, and $32, respectively; the estimated cost per DALY averted was $9.50. The future model was estimated to cost an average of $107,561 annually to implement nationally. The analysis indicated that the IPC bundle to prevent sepsis-related neonatal mortality was highly cost-effective. Cost reductions from task-shifting, reduced preparation (start-up) costs, and longer intervention periods would further decrease the cost per death averted. IPC bundle implementation can thus be recommended for resource-constrained settings where sepsis and other nosocomial infections are associated with high neonatal mortality.
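The cost-effectiveness ratios here are simple division: total intervention cost over events averted. The sketch below reproduces the arithmetic shape; only the cost figures echo the abstract, while the averted-event counts are hypothetical placeholders chosen for illustration.

```python
# Cost-effectiveness arithmetic: total cost / events averted.
implementation_cost = 17_641 + 5_265  # preparation + intervention, 2016 USD

def cost_per_outcome(total_cost: float, events_averted: float) -> float:
    return total_cost / events_averted

# e.g. with ~110 deaths averted (hypothetical), cost per death averted:
print(f"${cost_per_outcome(implementation_cost, 110):,.0f} per death averted")
# and per DALY, assuming ~2,400 DALYs averted (hypothetical):
print(f"${cost_per_outcome(implementation_cost, 2_400):.2f} per DALY averted")
```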
Dubey, A. K.; Reyes, J.; Rhiner, C.; Drescher, K.; Dunkel, J.; McKinney, J. D.; Egli, A.
Objectives. To quantify how urine sample type and polymicrobial context affect antimicrobial resistance (AMR) in urinary tract infections (UTIs), using routine diagnostics at scale. Methods. In this retrospective, single-centre study, we analysed 188,687 urine cultures from the Institute of Medical Microbiology, University of Zurich, Switzerland (January 2015 to May 2023). We compared midstream urine (MU), indwelling catheter (IDC), and intermittent catheter (IMC) samples. Samples were classified as negative, bacteriuria, or UTI according to a microbiological UTI threshold (≥10⁵ CFU/mL). We compared sample types using covariate-adjusted regression and constrained ordination for community composition. In bimicrobial cultures, we assessed co-occurrence using adjusted pairwise odds ratios and degree-preserving permutation null models, supported by partner-choice analyses. AMR was modelled as acquired resistance (AR) and total resistance (TR: acquired + intrinsic) probabilities, with predictor contributions quantified using mutual information. Results. Among 186,819 MU, IMC, and IDC samples, 56,867 met the UTI threshold. Catheter-associated UTIs (IDC and IMC) were ~60% more likely to be polymicrobial than MU samples. Community composition differed by sample type (p<0·001). In IDC samples, Escherichia coli was less prevalent than in MU, while device-associated pathogens such as Pseudomonas aeruginosa and Candida albicans were enriched. Most species pairs showed no increased co-occurrence after adjusting for covariates, but a subset showed reproducible enrichment across methods (e.g., C. albicans-C. glabrata). Organism identity was the dominant determinant of AMR, with the highest mutual information across AR and TR. AR was higher in IDC samples for common uropathogens (e.g., E. coli). Co-isolation with hospital-associated partners (e.g., Enterococcus faecium) was associated with a further increase in AR. From 2015 to 2023, AR increased from ~48% to ~60%, with rising β-lactam (+β-lactamase inhibitor) resistance and declining fluoroquinolone resistance in Enterobacterales. Conclusions. Sample type and co-isolated partners provide clinically actionable information beyond pathogen identity and could support more context-aware reporting and empiric prescribing.
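Quantifying predictor contributions with mutual information, as described for the AR/TR models, can be illustrated on toy data: the MI between a discrete predictor and the resistance label is largest for the variable that most constrains resistance. Everything below, organisms and probabilities alike, is invented for the sketch.

```python
import numpy as np
from sklearn.metrics import mutual_info_score

rng = np.random.default_rng(3)
n = 5_000
organism = rng.choice(["E. coli", "K. pneumoniae", "P. aeruginosa"], n)
sample_type = rng.choice(["MU", "IMC", "IDC"], n)
# Resistance depends strongly on organism, weakly on sample type.
base = {"E. coli": 0.3, "K. pneumoniae": 0.5, "P. aeruginosa": 0.7}
p_res = np.array([base[o] for o in organism]) + 0.05 * (sample_type == "IDC")
resistant = rng.random(n) < p_res

# Organism identity should carry the higher mutual information with AR.
print(f"MI(organism; AR)    = {mutual_info_score(organism, resistant):.4f}")
print(f"MI(sample type; AR) = {mutual_info_score(sample_type, resistant):.4f}")
```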
Davidson, R.; Heinstein, C.; Patriquin, G.; Goneau, L. W.; Brown, L. A.; Hill, B.
This dual-center study evaluated the impact of artificial intelligence (AI) on urine culture turnaround times in Canadian diagnostic laboratories employing full microbiology laboratory automation. Data were collected before and after the implementation of PhenoMATRIX (PM), an AI-based software designed to support culture sorting and result interpretation. In both a low-volume tertiary care hospital and a high-volume community laboratory, PM reduced the time to final culture reporting, with decreases of approximately 1.5 hours and 3.9 hours, respectively. Implementation of PM+, which automatically releases defined results to patient charts, further improved turnaround time. These findings indicate that microbiology laboratories with full laboratory automation can achieve further improvements in turnaround time by integrating AI-based culture assessment and results release.